8 research outputs found

    Learning probabilistic relational planning rules

    To learn to behave in highly complex domains, agents must represent and learn compact models of the world dynamics. In this paper, we present an algorithm for learning probabilistic STRIPS-like planning operators from examples. We demonstrate the effective learning of rule-based operators for a wide range of traditional planning domains.
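
The probabilistic STRIPS-like operators described in this abstract can be sketched roughly as a precondition set plus a distribution over alternative add/delete effect sets. The following is a minimal illustrative sketch assuming states are sets of ground literals; the class and the `pickup(A)` rule are my own illustrations, not the paper's exact formalism.

```python
import random

class ProbabilisticOperator:
    """A STRIPS-like operator whose effects are drawn from a distribution
    over alternative (add-set, delete-set) outcomes (illustrative sketch)."""

    def __init__(self, name, preconditions, outcomes):
        # outcomes: list of (probability, add_set, delete_set)
        self.name = name
        self.preconditions = preconditions
        self.outcomes = outcomes

    def applicable(self, state):
        # The rule fires only when all preconditions hold in the state.
        return self.preconditions <= state

    def sample_successor(self, state, rng=random):
        # Draw one outcome according to its probability, then apply it.
        if not self.applicable(state):
            return state
        r, acc = rng.random(), 0.0
        for p, adds, deletes in self.outcomes:
            acc += p
            if r < acc:
                return (state - deletes) | adds
        return state  # residual probability mass: no change

# Example: a noisy pickup(A) in a blocks world (hypothetical numbers).
pickup_a = ProbabilisticOperator(
    name="pickup(A)",
    preconditions={"clear(A)", "ontable(A)", "handempty"},
    outcomes=[
        (0.8, {"holding(A)"}, {"ontable(A)", "handempty", "clear(A)"}),
        (0.2, set(), set()),  # the gripper slips; state unchanged
    ],
)

state = {"clear(A)", "ontable(A)", "handempty"}
next_state = pickup_a.sample_successor(state)
```

Repeatedly sampling successors simulates the noisy, nondeterministic dynamics that such learned rules are meant to capture.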

    Logical particle filtering

    In this paper, we consider the problem of filtering in relational hidden Markov models. We present a compact representation for such models and an associated logical particle filtering algorithm. Each particle contains a logical formula that describes a set of states. The algorithm updates the formulae as new observations are received. Since a single particle tracks many states, this filter can be more accurate than a traditional particle filter in high-dimensional state spaces, as we demonstrate in experiments.

    Consider an agent operating in a complex environment made up of an unknown, possibly infinite, number of objects. The agent can take actions and make observations of the state of the world, and it knows a probabilistic model of how the state changes over time as a result of its actions and of how the observations are generated from the states. How can it efficiently estimate the underlying state of the environment? Filtering is the problem of predicting a distribution over the underlying environment state given a history of the agent's actions and observations.
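
The key idea above, one particle standing for many states via a logical formula, can be caricatured by letting each particle carry an explicit state set: narrowing that set on each observation plays the role of updating the particle's formula. This is a minimal sketch under a uniform-within-particle weighting assumption; the names and the update scheme are illustrative, not the paper's algorithm.

```python
class SetParticle:
    """A particle that tracks a *set* of states (standing in for a logical
    formula), so one particle covers many concrete states."""

    def __init__(self, states, weight=1.0):
        self.states = frozenset(states)
        self.weight = weight

def observe(particles, consistent):
    # `consistent(state)` says whether a state could have produced the
    # observation. Each particle's state set is narrowed to the consistent
    # subset, and its weight is scaled by the surviving fraction (a
    # uniform-within-particle assumption made only for this sketch).
    updated = []
    for p in particles:
        surviving = frozenset(s for s in p.states if consistent(s))
        if surviving:
            weight = p.weight * len(surviving) / len(p.states)
            updated.append(SetParticle(surviving, weight))
    total = sum(p.weight for p in updated)
    if total > 0:
        for p in updated:
            p.weight /= total  # renormalize
    return updated

# Example: states are (object_id, location); we observe location == "room1".
particles = [
    SetParticle({(1, "room1"), (2, "room2")}),
    SetParticle({(3, "room1"), (4, "room1")}),
]
particles = observe(particles, lambda s: s[1] == "room1")
```

Because one particle covers many states, far fewer particles are needed to cover a high-dimensional state space than in a conventional particle filter, which is the advantage the abstract claims.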

    Kaelbling. Learning symbolic models of stochastic domains

    In this article, we work towards the goal of developing agents that can learn to act in complex worlds. We develop a probabilistic, relational planning rule representation that compactly models noisy, nondeterministic action effects, and show how such rules can be effectively learned. Through experiments in simple planning domains and a 3D simulated blocks world with realistic physics, we demonstrate that this learning algorithm allows agents to effectively model world dynamics.

    Kaelbling. Learning planning rules in noisy stochastic worlds

    We present an algorithm for learning a model of the effects of actions in noisy stochastic worlds. We consider learning in a 3D simulated blocks world with realistic physics. To model this world, we develop a planning representation with explicit mechanisms for expressing object reference and noise. We then present a learning algorithm that can create rules while also learning derived predicates, and evaluate this algorithm in the blocks world simulator, demonstrating that we can learn rules that effectively model the world dynamics.
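
Two ingredients this abstract mentions, derived predicates and an explicit noise mechanism, can be sketched as follows. The `above` predicate and the outcome list are hypothetical illustrations of the general idea, not the paper's representation.

```python
def above(x, y, on_facts):
    # Derived predicate: above(x, y) holds if x sits on y directly or
    # transitively through a stack of on(a, b) facts.
    if (x, y) in on_facts:
        return True
    return any(a == x and above(b, y, on_facts) for a, b in on_facts)

# A rule's outcome distribution with an explicit catch-all "noise" outcome
# that reserves probability mass for effects the rule does not model.
rule_outcomes = [
    (0.7, {"on(A,B)"}),     # intended effect
    (0.2, {"ontable(A)"}),  # block dropped on the table
    (0.1, None),            # noise: the rule makes no prediction
]

on_facts = {("A", "B"), ("B", "C")}
stacked = above("A", "C", on_facts)  # True via A on B on C
```

Reserving mass for a noise outcome keeps the other outcomes simple: rare, hard-to-model physics (a toppling tower, say) does not have to be enumerated explicitly.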

    Learning symbolic models of stochastic domains

    In this article, we work towards the goal of developing agents that can learn to act in complex worlds. We develop a new probabilistic planning rule representation to compactly model noisy, nondeterministic action effects and show how these rules can be effectively learned. Through experiments in simple planning domains and a 3D simulated blocks world with realistic physics, we demonstrate that this learning algorithm allows agents to effectively model world dynamics.